ARCHER2 Advanced MPI course (September 2023)


David Henty, EPCC: 6-7 September 2023, 09:30 - 17:00 BST, online

This course is aimed at programmers seeking to deepen their understanding of MPI and explore some of its more recent and advanced features. We cover topics including exploiting shared-memory access from MPI programs, communicator management and advanced use of collectives. We also look at performance aspects such as which MPI routines to use for scalability, MPI internal implementation issues and overlapping communication and calculation.

Intended learning outcomes

  • Understanding of how internal MPI implementation details affect performance
  • Techniques for overlapping communications and calculation
  • Familiarity with neighbourhood collective operations in MPI
  • Understanding of best practice for MPI+OpenMP programming
  • Knowledge of MPI memory models for RMA operations

Prerequisites

Attendees should be familiar with MPI programming in C, C++ or Fortran, e.g. have attended the ARCHER2 MPI course.

Requirements

Participants must bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) on which they have administrative privileges.

They are also required to abide by the ARCHER2 Code of Conduct.

Timetable (all times are in British Summer Time)

Although the start and end times will be as indicated below, this is a draft timetable based on a previous run of the course and the details may change for this run.

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.

Day 1: Wednesday 6th September

Day 2: Thursday 7th September

Exercise Material

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.

Day 1

SLURM batch scripts are set to run in the short queue and should work any time. However, on days when the course is running, we have special reserved queues to guarantee fast turnaround.

The reserved queue for today is called ta117_1002288. To use this queue, change the --qos and --reservation lines to:

#SBATCH --qos=reservation
#SBATCH --reservation=ta117_1002288
  • Ping-pong exercise sheet

  • Ping-pong source code

  • A description of the 3D halo-swapping benchmark is in this README

  • Download the code directly to ARCHER2 using: git clone https://github.com/davidhenty/halobench

    • compile with make -f makefile-archer2
    • submit with sbatch archer2.job
  • Other things you could do with the halo swapping benchmark:

    • change the buffer size to be very small (a few tens of bytes) or very large (bigger than the eager limit, i.e. the message size above which the MPI library stops sending data immediately and switches to a rendezvous protocol) to see if that affects the results;
    • run on different numbers of nodes.
  • Note that you will need to change the number of repetitions to get reasonable runtimes: many more for smaller messages, many fewer for larger messages. Each test needs to run for at least a few seconds to give reliable results.

  • The halobench program contains an example of using MPI_Neighbor_alltoall() to do pairwise swaps of data between neighbouring processes in a regular 3D grid; a minimal sketch of this pattern is given after this list.

  • Tomorrow's traffic modelling problem sheet also contains a final MPI exercise in Section 3 to replace point-to-point boundary swapping with neighbourhood collectives.
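
The following is a minimal sketch, not the halobench source itself, of that neighbourhood-collective pattern; the buffer size NBUF and the flat data layout are illustrative assumptions. MPI_Dims_create and MPI_Cart_create set up a periodic 3D process grid, and a single MPI_Neighbor_alltoall call then exchanges one block of data with each of the six neighbours in place of six send/receive pairs.

/* Minimal sketch, not the halobench source: pairwise swaps with the six
 * neighbours on a periodic 3D process grid using MPI_Neighbor_alltoall.
 * The buffer size NBUF is an illustrative assumption. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBUF 1000              /* doubles exchanged with each neighbour */

int main(int argc, char **argv)
{
    int dims[3]    = {0, 0, 0};   /* let MPI_Dims_create choose the decomposition */
    int periods[3] = {1, 1, 1};   /* periodic boundaries in all three dimensions  */
    int rank, size;
    MPI_Comm cartcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Build a 3D Cartesian communicator; its topology defines the
       neighbours used by the neighbourhood collective below. */
    MPI_Dims_create(size, 3, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cartcomm);
    MPI_Comm_rank(cartcomm, &rank);

    /* Six neighbours in 3D, ordered -x, +x, -y, +y, -z, +z. */
    double *sendbuf = malloc(6 * NBUF * sizeof(double));
    double *recvbuf = malloc(6 * NBUF * sizeof(double));
    for (int i = 0; i < 6 * NBUF; i++) sendbuf[i] = (double) rank;

    /* One collective call sends NBUF doubles to, and receives NBUF doubles
       from, every neighbour, replacing six point-to-point send/receive pairs. */
    MPI_Neighbor_alltoall(sendbuf, NBUF, MPI_DOUBLE,
                          recvbuf, NBUF, MPI_DOUBLE, cartcomm);

    if (rank == 0) printf("Neighbour swap complete on %d processes\n", size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

On ARCHER2, an MPI program like this is normally built with the cc compiler wrapper and launched with srun from a batch script, as in the supplied archer2.job.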

Day 2

The reserved queue for today is called ta117_1002291. To use this queue, change the --qos and --reservation lines to:

#SBATCH --qos=reservation
#SBATCH --reservation=ta117_1002291

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
